
    DSMNet: Deep High-precision 3D Surface Modeling from Sparse Point Cloud Frames

    Existing point cloud modeling datasets primarily characterize modeling precision through pose or trajectory accuracy rather than through the quality of the reconstructed point cloud itself. To address this gap, we first independently construct a LiDAR system with an optical stage, and then use it to build HPMB, a High-Precision, Multi-Beam, real-world dataset. Second, we propose an HPMB-based evaluation method for object-level modeling to overcome this limitation. In addition, existing point cloud modeling methods tend to generate continuous skeletons of the global environment and therefore pay little attention to the shapes of complex objects. To tackle this challenge, we propose DSMNet, a novel learning-based joint framework for high-precision 3D surface modeling from sparse point cloud frames. DSMNet comprises density-aware Point Cloud Registration (PCR) and geometry-aware Point Cloud Sampling (PCS) modules to effectively learn the implicit structural features of sparse point clouds. Extensive experiments demonstrate that DSMNet outperforms state-of-the-art PCS and PCR methods on the Multi-View Partial Point Cloud (MVP) database. Furthermore, experiments on the open-source KITTI dataset and our proposed HPMB dataset show that DSMNet can be generalized as a post-processing step for Simultaneous Localization And Mapping (SLAM), thereby improving modeling precision in environments with sparse point clouds.
    Comment: To be published in IEEE Geoscience and Remote Sensing Letters (GRSL)
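
    DSMNet's PCS module is learning-based and its details are not given in the abstract. As a rough illustration of what geometry-aware sampling buys over uniform sampling, the classical (non-learned) farthest point sampling baseline below keeps the shape extremities of a sparse cloud; function names and shapes here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Classical geometry-aware downsampling: iteratively keep the
    point farthest from everything selected so far, preserving shape
    extremities that uniform random sampling tends to miss.
    `points` is an (n, d) array; returns a (k, d) array."""
    chosen = [0]  # start from an arbitrary seed point
    dists = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        idx = int(dists.argmax())          # farthest from the chosen set
        chosen.append(idx)
        # distance to the chosen set = min over chosen points
        dists = np.minimum(dists, np.linalg.norm(points - points[idx], axis=1))
    return points[chosen]
```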

    Efficient Deep Reinforcement Learning via Adaptive Policy Transfer

    Transfer Learning (TL) has shown great potential to accelerate Reinforcement Learning (RL) by leveraging prior knowledge from previously learned policies of relevant tasks. Existing transfer approaches either explicitly compute the similarity between tasks or select appropriate source policies to provide guided exploration for the target task. However, a method that directly optimizes the target policy by alternately drawing on knowledge from appropriate source policies, without explicitly measuring similarity, is currently missing. In this paper, we propose a novel Policy Transfer Framework (PTF) to accelerate RL by taking advantage of this idea. Our framework learns when and which source policy is best to reuse for the target policy, and when to terminate it, by modeling multi-policy transfer as an option learning problem. PTF can be easily combined with existing deep RL approaches. Experimental results show that it significantly accelerates the learning process and surpasses state-of-the-art policy transfer methods in terms of learning efficiency and final performance, in both discrete and continuous action spaces.
    Comment: Accepted by IJCAI'2020
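
    PTF's learned option model is not reproduced here; the sketch below only illustrates the option-style control flow the abstract describes. The `values` and `termination_probs` callables are hypothetical stand-ins for the learned value estimates and termination functions.

```python
import random

def select_source_policy(values, termination_probs, state, current=None):
    """Option-style reuse: keep executing the current source policy
    until its termination function fires, then greedily re-select the
    source policy whose estimated value for this state is highest.
    `values[k](state)` and `termination_probs[k](state)` are
    hypothetical learned callables, one per source policy."""
    if current is not None and random.random() >= termination_probs[current](state):
        return current  # option has not terminated; keep reusing it
    # terminate and re-select greedily over the source options
    return max(range(len(values)), key=lambda k: values[k](state))
```

    A full PTF implementation would additionally distill the selected source policy's action probabilities into the target policy's loss, rather than executing the source policy directly.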

    TIPE2 regulates periodontal inflammation by inhibiting NF-κB p65 phosphorylation

    The roles and molecular mechanisms of tumor necrosis factor-α-induced protein 8-like 2 (TIPE2) in periodontitis remain largely unknown. Objective: This study aimed to determine the expression of TIPE2 and NF-κB p65 in rat Porphyromonas gingivalis-induced periodontitis in vivo. Methodology: Periodontal inflammation and alveolar bone resorption were analyzed using western blotting, micro-computed tomography, TRAP staining, immunohistochemistry, and immunofluorescence. THP-1 monocytes were stimulated with 1 μg/ml Pg. lipopolysaccharide (Pg.LPS) to determine the expression of TIPE2 in vitro. TIPE2 mRNA was suppressed by siRNA transfection, and the transfection efficiency was verified using western blotting and real-time PCR. The NF-κB pathway was activated by treating the cells with 1 μg/ml Pg.LPS to explore the related mechanisms. Results: The expression of both TIPE2 and NF-κB p65 was increased in the gingival tissues of rats with periodontitis compared with normal tissues. Positive expression of TIPE2 was distributed in inflammatory infiltrating cells and in osteoclasts in the marginal lacunae of the alveolar bone. However, the strong positive expression of TIPE2 in THP-1 cells was downregulated after Pg.LPS stimulation. TIPE2 levels negatively correlated with TNF-α and IL-1β. Decreased TIPE2 in THP-1 cells further promoted NF-κB p65 phosphorylation. Mechanistically, TIPE2 knockdown upregulated NF-κB signaling pathway activity. Conclusions: Taken together, these findings demonstrate that TIPE2 knockdown aggravates periodontal inflammatory infiltration via the NF-κB pathway. Interventions aimed at increasing TIPE2 may support therapeutic applications for periodontitis.

    Rethinking Noisy Label Learning in Real-world Annotation Scenarios from the Noise-type Perspective

    We investigate the problem of learning with noisy labels in real-world annotation scenarios, where noise can be categorized into two types: factual noise and ambiguity noise. To better distinguish these noise types and utilize their semantics, we propose a novel sample-selection-based approach for noisy label learning, called Proto-semi. Proto-semi initially divides all samples into confident and unconfident datasets via a warm-up stage. By leveraging the confident dataset, prototype vectors are constructed to capture class characteristics. Subsequently, the distances between the unconfident samples and the prototype vectors are calculated to facilitate noise classification. Based on these distances, the labels are either corrected or retained, refining the confident and unconfident datasets. Finally, we introduce a semi-supervised learning method to enhance training. Empirical evaluations on a real-world annotated dataset substantiate the robustness of Proto-semi in handling the problem of learning from noisy labels. Meanwhile, the prototype-based repartitioning strategy is shown to be effective in mitigating the adverse impact of label noise. Our code and data are available at https://github.com/fuxiAIlab/ProtoSemi
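
    The prototype-and-distance step the abstract describes can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (Euclidean distance, a hypothetical `threshold` for accepting a correction), not the paper's exact procedure.

```python
import numpy as np

def build_prototypes(features, labels, num_classes):
    """Average the feature vectors of the confident samples per class
    to obtain one prototype vector per class."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def relabel_unconfident(features, noisy_labels, prototypes, threshold):
    """Assign each unconfident sample to its nearest prototype; correct
    the label when the nearest class disagrees with the noisy label and
    the sample lies within `threshold`, otherwise retain the label."""
    corrected = noisy_labels.copy()
    for i, f in enumerate(features):
        dists = np.linalg.norm(prototypes - f, axis=1)
        nearest = int(dists.argmin())
        if nearest != noisy_labels[i] and dists[nearest] < threshold:
            corrected[i] = nearest
    return corrected
```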

    Easy and Efficient Transformer: Scalable Inference Solution For Large NLP Model

    Recently, large-scale transformer-based models have proven effective across a variety of tasks in many domains. Nevertheless, putting them into production is very expensive, requiring comprehensive optimization techniques to reduce inference costs. This paper introduces a series of transformer inference optimization techniques spanning both the algorithm level and the hardware level. These techniques include a pre-padding decoding mechanism that improves token parallelism for text generation, and highly optimized kernels designed for very long input lengths and large hidden sizes. On this basis, we propose a transformer inference acceleration library -- Easy and Efficient Transformer (EET) -- which delivers a significant performance improvement over existing libraries. Compared to Faster Transformer v4.0's implementation of a GPT-2 layer on an A100, EET achieves a state-of-the-art 1.5-4.5x speedup, varying with context length. EET is available at https://github.com/NetEase-FuXi/EET. A demo video is available at https://youtu.be/22UPcNGcErg
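
    EET's kernels are CUDA-level, but the pre-padding idea itself is simple to show: left-padding variable-length prompts aligns every sequence's last token at the same column, so each decoding step appends one token per batch row in lockstep. The sketch below is an illustrative reconstruction of that batching trick, with a hypothetical `PAD_ID`, not EET's actual code.

```python
import numpy as np

PAD_ID = 0  # hypothetical pad token id

def pre_pad(prompts):
    """Left-pad variable-length prompts to a common length so every
    sequence ends at the same column; subsequent decoding steps can
    then write one new token per row at the same position, without
    per-row offset bookkeeping. `prompts` is a list of token-id lists."""
    max_len = max(len(p) for p in prompts)
    batch = np.full((len(prompts), max_len), PAD_ID, dtype=np.int64)
    for row, p in enumerate(prompts):
        batch[row, max_len - len(p):] = p
    return batch
```

    A real implementation would also build an attention mask that zeroes out the pad positions on the left.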